Baldur Bjarnason, a web developer from Hveragerði, Iceland, recently shared his thoughts on the evolving discourse around fair use in the context of generative AI models. He pointed to a paper by Jacqueline Charlesworth, a former general counsel of the U.S. Copyright Office, which critically examines the fair use claims made by proponents of generative AI. The paper highlights a significant shift in legal scholarship on whether fair use applies to the training of generative models, a shift driven by a clearer understanding of how the technology works. Charlesworth argues that the four factors outlined in Section 107 of the Copyright Act generally weigh against AI developers' fair use claims, especially given the rapidly developing market for licensed training materials.

A key point of the analysis is that the case for fair use often rests on a misunderstanding of how AI systems operate. Contrary to the belief that works used for training are discarded once training ends, those works are effectively encoded into the model and continue to shape its outputs. Converting works into tokens and incorporating them into a model's parameters does not align with the principles of fair use; it amounts to exploitation of the works rather than a transformative use.

Charlesworth distinguishes between copying expressive works for functional purposes, such as search or indexing, and the mass appropriation of creative content for commercial gain. The latter, she argues, has no precedent in fair use case law and cannot be justified by existing legal frameworks. The paper stresses that encoding copyrighted works into a more computationally usable format does not shield the copying from being infringement. The further claim that generative AI's copying should be deemed transformative simply because it enables generative capabilities is critiqued as sweeping and unfounded.
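The tokenization step mentioned above can be sketched in miniature. This toy word-level encoder is purely illustrative (production models use learned subword vocabularies such as BPE, not this scheme), but it shows the basic point: the training text is re-encoded as numeric IDs the model consumes, not discarded.

```python
# Toy illustration only: a word-level tokenizer, far simpler than the
# learned subword vocabularies (e.g. BPE) used by real generative models.

def build_vocab(corpus: str) -> dict[str, int]:
    """Assign an integer ID to each distinct word in the corpus."""
    vocab: dict[str, int] = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert text into the sequence of token IDs a model would train on."""
    return [vocab[word] for word in text.split()]

corpus = "the quick brown fox jumps over the lazy dog"
vocab = build_vocab(corpus)
print(encode("the lazy fox", vocab))  # → [0, 6, 3]
```

The transformation is lossless for text the vocabulary covers, which is why, as Charlesworth notes, converting a work into tokens is a change of format rather than an erasure of the work.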
This argument essentially asks that copyright owners' rights be overridden by the perceived societal benefits of generative AI, which is not a recognized legal defense in copyright disputes. The narrative advanced by AI companies, that licensing content for training is unfeasible, also faces scrutiny: these companies have shown they can and do license material when it serves their interests, which undermines the claim that copyright owners lose no revenue from the appropriation of their works. Overall, Bjarnason encourages readers to explore Charlesworth's paper, praising its accessible language and stressing the importance of understanding the legal implications of generative AI for copyright law.